Optimization of Smooth and Strongly Convex Functions

Authors

  • Lijun Zhang
  • Tianbao Yang
  • Rong Jin
  • Xiaofei He
Abstract

A. Proof of Lemma 1

We need the following lemma, which characterizes a key property of extra-gradient descent.

Lemma 8 (Lemma 3.1 in (Nemirovski, 2005)). Let Z be a compact convex set in a Euclidean space E with inner product 〈·, ·〉, let ‖ · ‖ be a norm on E with dual norm ‖ · ‖∗, and let ω(z) : Z → R be an α-strongly convex function with respect to ‖ · ‖. The Bregman distance associated with ω for points z, w ∈ Z is defined as Bω(z, w) = ω(z) − ω(w) − 〈z − w, ∇ω(w)〉. Let U be a closed convex subset of Z, let z₋ ∈ Z, let ξ, η ∈ E, and let γ > 0. Consider the points ...
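As a concrete special case (standard, and not specific to this excerpt): taking ω(z) = ½‖z‖² with the Euclidean norm gives ∇ω(w) = w, so the Bregman distance reduces to Bω(z, w) = ½‖z − w‖², and the lemma covers ordinary Euclidean extra-gradient descent. Below is a minimal sketch of the classical Euclidean extra-gradient step (Korpelevich-style); the names project, extragradient_step, the gradient oracle grad, and the step size gamma are illustrative assumptions, not notation from the paper.

    import numpy as np

    def project(z, radius=1.0):
        # Euclidean projection onto the ball {z : ||z|| <= radius};
        # stands in for the projection onto the feasible set U.
        norm = np.linalg.norm(z)
        return z if norm <= radius else z * (radius / norm)

    def extragradient_step(z, grad, gamma, proj=project):
        # Probe with the gradient at z, then update z using the
        # gradient evaluated at the probe point w.
        w = proj(z - gamma * grad(z))       # extrapolation (probe) point
        z_next = proj(z - gamma * grad(w))  # corrected update
        return z_next, w

    # Usage: minimize f(z) = 0.5 * ||z - c||^2 over the unit ball.
    c = np.array([2.0, -1.0])
    grad = lambda z: z - c
    z = np.zeros(2)
    for _ in range(200):
        z, _ = extragradient_step(z, grad, gamma=0.5)
    # z approaches c / ||c||, the projection of c onto the unit ball.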

Similar articles

Smooth strongly convex interpolation and exact worst-case performance of first-order methods

We show that the exact worst-case performance of fixed-step first-order methods for unconstrained optimization of smooth (possibly strongly) convex functions can be obtained by solving convex programs. Finding the worst-case performance of a black-box first-order method is formulated as an optimization problem over a set of smooth (strongly) convex functions and initial conditions. We develop c...
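Schematically, this worst-case computation is an optimization problem of the form (a sketch in generic performance-estimation notation; the bound R and horizon N are placeholders, not this paper's symbols): maximize f(x_N) − f(x∗) over all functions f in the given smooth (strongly) convex class and all starting points x₀ with ‖x₀ − x∗‖ ≤ R, where x_N is the N-th iterate of the fixed-step method under study. The result cited above is that this infinite-dimensional problem can be reduced to a finite convex program and solved exactly.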

On the quadratic support of strongly convex functions

In this paper, we first introduce the notion of $c$-affine functions for $c > 0$. Then we deal with some properties of strongly convex functions in real inner product spaces by using a quadratic support function at each point which is $c$-affine. Moreover, a Hyers–Ulam stability result for strongly convex functions is shown.
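For orientation (a standard fact, not quoted from this abstract): under the convention that f is strongly convex with modulus c when f(tx + (1 − t)y) ≤ tf(x) + (1 − t)f(y) − ct(1 − t)‖x − y‖², a differentiable strongly convex f admits at every point x₀ the quadratic lower bound f(x) ≥ f(x₀) + 〈∇f(x₀), x − x₀〉 + c‖x − x₀‖². This quadratic minorant is the kind of support object the abstract refers to.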

Conditional gradient type methods for composite nonlinear and stochastic optimization

In this paper, we present a conditional gradient type (CGT) method for solving a class of composite optimization problems where the objective function consists of a (weakly) smooth term and a strongly convex term. While including this strongly convex term in the subproblems of the classical conditional gradient (CG) method improves its convergence rate for solving strongly convex problems, it d...
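For reference (standard notation, not this paper's): the classical CG (Frank–Wolfe) subproblem that the abstract alludes to is the linear minimization step s_t ∈ argmin_{s ∈ X} 〈∇f(x_t), s〉, followed by the averaging update x_{t+1} = (1 − γ_t)x_t + γ_t s_t, which trades projections for linear optimization over the feasible set X. The CGT variant described above keeps the strongly convex component of the objective in this subproblem rather than linearizing it.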

Stochastic Coordinate Descent for Nonsmooth Convex Optimization

Stochastic coordinate descent, due to its practicality and efficiency, is increasingly popular in the machine learning and signal processing communities, having proven successful in several large-scale optimization problems such as ℓ1-regularized regression and support vector machines, to name a few. In this paper, we consider a composite problem where the nonsmoothness has a general structure that...
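As a concrete instance of this method family (a minimal sketch for the ℓ1-regularized least-squares objective ½‖Ax − b‖² + λ‖x‖₁; the function names and dense-matrix setup are illustrative assumptions, not this paper's algorithm): each step samples one coordinate uniformly at random and minimizes the objective exactly in that coordinate via soft-thresholding.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal map of t * |.| applied to a scalar.
        return np.sign(v) * max(abs(v) - t, 0.0)

    def coordinate_descent_lasso(A, b, lam, iters=20000, seed=0):
        # Randomized exact coordinate minimization for
        # 0.5 * ||A x - b||^2 + lam * ||x||_1.
        rng = np.random.default_rng(seed)
        n = A.shape[1]
        x = np.zeros(n)
        col_sq = (A ** 2).sum(axis=0)       # per-coordinate curvature ||A_j||^2
        r = A @ x - b                       # residual, kept in sync with x
        for _ in range(iters):
            j = rng.integers(n)             # sample a coordinate uniformly
            if col_sq[j] == 0.0:
                continue
            rho = x[j] - (A[:, j] @ r) / col_sq[j]
            x_new = soft_threshold(rho, lam / col_sq[j])
            r += A[:, j] * (x_new - x[j])   # cheap residual update
            x[j] = x_new
        return x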

A NOVEL META-HEURISTIC ALGORITHM: TUG OF WAR OPTIMIZATION

This paper presents a novel population-based meta-heuristic algorithm inspired by the game of tug of war. Utilizing a sport metaphor, the algorithm, denoted Tug of War Optimization (TWO), considers each candidate solution as a team participating in a series of rope-pulling competitions. The teams exert pulling forces on each other...

Quasi-Gap and Gap Functions for Non-Smooth Multi-Objective Semi-Infinite Optimization Problems

In this paper, we introduce and study some new single-valued gap functions for non-differentiable semi-infinite multiobjective optimization problems with locally Lipschitz data. Since one of the fundamental properties of a gap function is its ability to characterize the solutions of the problem in question, the essential properties of the newly introduced ...
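For context (a classical single-objective example, not taken from this abstract): for min_{x∈K} f(x) with f smooth and convex on a closed convex set K, the function g(x) = sup_{y∈K} 〈∇f(x), x − y〉 is a gap function in exactly this sense: g(x) ≥ 0 for all x ∈ K, and g(x) = 0 if and only if x is an optimal solution.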


Publication date: 2013